Relation extraction (RE), which has relied on structurally annotated corpora for model training, is particularly challenging in low-resource scenarios and domains. Recent literature has tackled low-resource RE with self-supervised learning, where the solution pretrains relation embeddings with an RE-based objective and finetunes on labeled data with a classification-based objective. However, a critical challenge for this approach is the gap between the two objectives, which prevents the RE model from fully utilizing the knowledge in the pretrained representations. In this paper, we aim to bridge this gap and propose to pretrain and finetune the RE model using consistent objectives of contrastive learning. Since, in this representation learning paradigm, one relation may easily form multiple clusters in the representation space, we further propose a multi-center contrastive loss that allows one relation to form multiple clusters, better aligning finetuning with pretraining. Experiments on two document-level RE datasets, BioRED and Re-DocRED, demonstrate the effectiveness of our method. In particular, when using 1% of the end-task training data, our method outperforms a PLM-based RE classifier by 10.5% and 5.8% on the two datasets, respectively.
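To make the multi-center idea concrete, below is a minimal PyTorch sketch of one plausible form of such a loss; the paper's exact formulation may differ, and the center count `num_centers` and temperature `tau` here are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

class MultiCenterContrastiveLoss(torch.nn.Module):
    """Sketch: each relation owns K learnable centers. An instance is
    pulled toward the nearest center of its gold relation (positive) and
    pushed away from all other centers via an InfoNCE-style objective."""

    def __init__(self, num_relations: int, num_centers: int, dim: int, tau: float = 0.1):
        super().__init__()
        # Row i*K + k is the k-th center of relation i.
        self.centers = torch.nn.Parameter(torch.randn(num_relations * num_centers, dim))
        self.num_centers = num_centers
        self.tau = tau

    def forward(self, emb: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        emb = F.normalize(emb, dim=-1)
        centers = F.normalize(self.centers, dim=-1)
        sim = emb @ centers.T / self.tau                        # (B, R*K)
        # Gather the K centers of each instance's gold relation and take
        # the most similar one as the positive.
        offsets = torch.arange(self.num_centers, device=labels.device)
        idx = labels.unsqueeze(1) * self.num_centers + offsets  # (B, K)
        pos = sim.gather(1, idx).max(dim=1).values
        return (-pos + torch.logsumexp(sim, dim=1)).mean()
```

Reusing the same loss form in both pretraining and finetuning is what gives the consistency of objectives that the abstract argues for.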
We present an efficient bi-encoder framework for named entity recognition (NER), which applies contrastive learning to map candidate text spans and entity types into the same vector representation space. Prior work predominantly approaches NER as sequence labeling or span classification. We instead frame NER as a metric learning problem that maximizes the similarity between the vector representations of an entity mention and its type. This makes it easy to handle nested and flat NER alike, and allows better utilization of noisy self-supervision signals. A major challenge for this bi-encoder formulation of NER lies in separating non-entity spans from entity mentions. Instead of explicitly labeling all non-entity spans as the same Outside (O) class, as most prior methods do, we introduce a novel dynamic thresholding loss that is learned in conjunction with the standard contrastive loss. Experiments show that our method performs well in both supervised and distantly supervised settings (e.g., GENIA, NCBI, BC5CDR, JNLPBA).
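As a rough illustration of the bi-encoder-plus-threshold idea, here is a hedged PyTorch sketch in which a special "threshold" type vector stands in for the O class; the paper's actual dynamic thresholding loss may be formulated differently.

```python
import torch
import torch.nn.functional as F

def dynamic_threshold_loss(span_emb, type_emb, labels, tau=0.07):
    """Sketch of a bi-encoder NER loss with a learned threshold.

    span_emb : (B, d) candidate-span vectors
    type_emb : (T, d) entity-type vectors; row 0 is a learned
               "threshold" type standing in for the O class
    labels   : (B,) gold type index, 0 for non-entity spans
    """
    logits = F.normalize(span_emb, dim=-1) @ F.normalize(type_emb, dim=-1).T / tau
    thr = logits[:, 0]                                   # per-span threshold score
    gold = logits.gather(1, labels.unsqueeze(1)).squeeze(1)
    is_entity = (labels > 0).float()
    # Entity spans: the gold-type score should exceed the threshold.
    pos_term = is_entity * F.softplus(thr - gold)
    # Non-entity spans: every real type should stay below the threshold.
    neg_term = (1 - is_entity) * F.softplus(logits[:, 1:].max(dim=1).values - thr)
    return (pos_term + neg_term).mean()
```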
Multimodal data is abundant in biomedicine, such as radiology images and reports. Interpreting this data at scale is essential for improving clinical care and accelerating clinical research. Compared with the general domain, biomedical text with its complex semantics poses additional challenges for visual modeling, and prior work has used insufficiently adapted models that lack domain-specific language understanding. In this paper, we show that principled textual semantic modeling can substantially improve contrastive learning in self-supervised vision-language processing. We release a language model that achieves state-of-the-art results on radiology natural language inference through an improved vocabulary and a novel language pretraining objective that leverages the semantics and discourse characteristics of radiology reports. Furthermore, we propose a self-supervised joint vision-language approach with a focus on better text modeling. It establishes new state-of-the-art results on a wide range of publicly available benchmarks, in part by leveraging our new domain-specific language model. We release a new dataset with locally aligned phrase grounding annotations by radiologists to facilitate the study of complex semantic modeling in biomedical vision-language processing. A broad evaluation, including on this new dataset, shows that our contrastive learning approach, aided by textual semantic modeling, outperforms prior methods on segmentation tasks despite using only a global alignment objective.
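The "global alignment objective" mentioned above is, in spirit, the standard symmetric image-text contrastive loss; a generic sketch (not the exact formulation used in the paper) looks like this:

```python
import torch
import torch.nn.functional as F

def global_alignment_loss(img_emb, txt_emb, tau=0.07):
    """Symmetric InfoNCE over paired image and report embeddings:
    each image must pick out its own report among the batch, and
    vice versa."""
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.T / tau                 # (B, B) pairwise similarities
    targets = torch.arange(img.size(0), device=img.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.T, targets))
```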
Entity linking faces significant challenges, such as prolific variations and prevalent ambiguities, especially in high-value domains with myriad entities. Standard classification approaches suffer from an annotation bottleneck and cannot effectively handle unseen entities. Zero-shot entity linking has emerged as a promising direction for generalizing to new entities, but it still requires example mentions during training and canonical descriptions for all entities, both of which are rarely available outside of Wikipedia. In this paper, we explore knowledge-rich self-supervision ($\tt KRISS$) for entity linking by leveraging readily available domain knowledge. In training, it generates self-supervised mention examples from unlabeled text using a domain ontology and trains a contextual encoder with contrastive learning. For inference, it samples self-supervised mentions as prototypes for each entity and performs linking by mapping a test mention to the most similar prototype. Our approach subsumes zero-shot and few-shot methods, and can easily incorporate entity descriptions and gold mention labels if available. Using biomedicine as a case study, we conduct extensive experiments on seven standard datasets spanning biomedical literature and clinical notes. Without using any labeled information, our method produces $\tt KRISSBERT$, a universal entity linker for four million UMLS entities that attains a new state of the art, outperforming prior self-supervised methods by over 20 absolute points.
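Inference in this prototype-based setup reduces to a nearest-neighbor lookup. A minimal sketch, assuming mention and prototype vectors come from the trained contextual encoder (at UMLS scale one would use approximate nearest-neighbor search rather than a dense matmul):

```python
import torch.nn.functional as F

def link_mention(mention_emb, proto_emb, proto_entity_ids):
    """Prototype-based linking sketch: map a test mention to the entity
    whose self-supervised prototype is most similar.

    mention_emb      : (d,) encoded test mention
    proto_emb        : (P, d) prototype vectors sampled per entity
    proto_entity_ids : list of length P mapping each prototype to an entity
    """
    sims = F.normalize(proto_emb, dim=-1) @ F.normalize(mention_emb, dim=0)
    return proto_entity_ids[sims.argmax().item()]
```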
Motivation: A perennial challenge for biomedical researchers and clinical practitioners is staying abreast of the rapid growth of publications and medical notes. Natural language processing (NLP) has emerged as a promising direction for taming information overload. In particular, large neural language models facilitate transfer learning by pretraining on unlabeled text, as exemplified by the success of BERT models in various NLP applications. However, fine-tuning such models for end tasks remains challenging, especially with small labeled datasets, which are common in biomedical NLP. Results: We conduct a systematic study of fine-tuning stability in biomedical NLP. We show that fine-tuning performance may be sensitive to pretraining settings, especially in low-resource domains. Large models have the potential to attain better performance, but increasing model size also exacerbates fine-tuning instability. We therefore conduct a comprehensive exploration of techniques for addressing fine-tuning instability, and show that these techniques can substantially improve fine-tuning performance for low-resource biomedical NLP applications. Specifically, freezing the lower layers helps standard BERT-BASE models, while layerwise decay is more effective for BERT-LARGE and ELECTRA models. For low-resource text similarity tasks (e.g., BIOSSES), reinitializing the top layers is the optimal strategy. Overall, domain-specific vocabulary and pretraining facilitate more robust models for fine-tuning. Based on these findings, we establish a new state of the art on a wide range of biomedical NLP applications. Availability and implementation: To facilitate progress in biomedical NLP, we release our state-of-the-art pretrained and fine-tuned models: https://aka.ms/blurb.
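The three stabilization techniques the study highlights are straightforward to express with Hugging Face Transformers. The sketch below uses `bert-base-uncased` as a stand-in checkpoint, and the layer counts and decay factor are illustrative; the techniques are alternatives chosen per model size and task, not meant to be stacked:

```python
import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # stand-in checkpoint for illustration

# 1) Freeze the embeddings and the lowest K transformer layers
#    (reported as helpful for standard BERT-BASE models).
K = 4
for p in model.bert.embeddings.parameters():
    p.requires_grad = False
for layer in model.bert.encoder.layer[:K]:
    for p in layer.parameters():
        p.requires_grad = False

# 2) Layerwise learning-rate decay (reported as more effective for
#    BERT-LARGE and ELECTRA): lower layers get smaller learning rates.
base_lr, decay = 2e-5, 0.9
n = len(model.bert.encoder.layer)
groups = [{"params": layer.parameters(), "lr": base_lr * decay ** (n - 1 - i)}
          for i, layer in enumerate(model.bert.encoder.layer)]
groups.append({"params": model.classifier.parameters(), "lr": base_lr})
optimizer = torch.optim.AdamW(groups)

# 3) Re-initialize the top layers before fine-tuning (reported as the
#    best strategy for low-resource text similarity tasks).
for layer in model.bert.encoder.layer[-2:]:
    layer.apply(model._init_weights)
```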
When a human communicates with a machine using natural language on the web, how can the machine understand the human's intention and the semantic context of their utterance? This is an important AI task, as it enables the machine to construct a sensible answer or perform a useful action for the human. Meaning is represented at the sentence level, the identification of which is known as intent detection, and at the word level, a labelling task called slot filling. This dual-level joint task requires innovative thinking about natural language and deep learning network design, and as a result, many approaches and models have been proposed and applied. This tutorial will discuss how the joint task is set up and introduce Spoken Language Understanding / Natural Language Understanding (SLU/NLU) with deep learning techniques. We will cover the datasets, experiments, and metrics used in the field, and describe how the machine uses the latest NLP and deep learning techniques to address the joint task, including recurrent and attention-based Transformer networks and pre-trained models (e.g., BERT). We will then look in detail at a network that allows the two levels of the task, intent classification and slot filling, to interact explicitly to boost performance. We will give a code demonstration of a Python notebook for this model, and attendees will have the opportunity to follow the coding demo of this joint NLU model to further their understanding.
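A minimal sketch of the kind of joint architecture the tutorial builds toward, assuming a BERT-style encoder shared by both heads (names and dimensions here are illustrative):

```python
import torch
from transformers import AutoModel

class JointNLU(torch.nn.Module):
    """Minimal joint model sketch: one encoder, two heads. The [CLS]
    vector drives intent classification; per-token vectors drive slot
    filling (BIO tagging)."""

    def __init__(self, n_intents, n_slots, name="bert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(name)
        h = self.encoder.config.hidden_size
        self.intent_head = torch.nn.Linear(h, n_intents)
        self.slot_head = torch.nn.Linear(h, n_slots)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        intent_logits = self.intent_head(out.last_hidden_state[:, 0])  # [CLS]
        slot_logits = self.slot_head(out.last_hidden_state)            # per token
        return intent_logits, slot_logits
```

Training typically minimizes the sum of the intent and slot cross-entropy losses, which is what lets the two levels of the task interact through the shared encoder.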
Most TextVQA approaches focus on integrating objects, scene texts, and question words with a simple transformer encoder, but this fails to capture the semantic relations between different modalities. This paper proposes a Scene Graph based co-Attention Network (SceneGATE) for TextVQA, which reveals the semantic relations among objects, Optical Character Recognition (OCR) tokens, and question words. This is achieved by a TextVQA-based scene graph that discovers the underlying semantics of an image. We create a guided-attention module to capture the intra-modal interplay between language and vision as a guidance for inter-modal interactions. To teach the relations between the two modalities explicitly, we propose and integrate two attention modules, namely a scene-graph-based semantic relation-aware attention and a positional relation-aware attention. We conduct extensive experiments on two benchmark datasets, Text-VQA and ST-VQA, showing that our SceneGATE method outperforms existing approaches thanks to the scene graph and its attention modules.
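As a loose sketch of what a guided-attention block can look like, here question-word features act as queries over visual features (objects plus OCR tokens); this is a generic cross-attention layer, not SceneGATE's exact module, and the dimensions are illustrative:

```python
import torch

class GuidedAttention(torch.nn.Module):
    """Sketch: language features guide attention over visual features
    via cross-attention, followed by a residual connection and norm."""

    def __init__(self, dim=768, heads=8):
        super().__init__()
        self.cross = torch.nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = torch.nn.LayerNorm(dim)

    def forward(self, question_feats, visual_feats):
        # Question words query the visual features (objects + OCR tokens).
        attended, _ = self.cross(query=question_feats,
                                 key=visual_feats, value=visual_feats)
        return self.norm(question_feats + attended)
```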
Scene Graph Generation (SGG) serves as a comprehensive representation of images for human understanding as well as for visual understanding tasks. Due to the long-tail bias of the object and predicate labels in the available annotated data, the scene graphs generated by current methods can be biased toward common, non-informative relationship labels. Relationships can also be non-mutually exclusive and may be described from multiple perspectives, such as geometrical or semantic relationships, making it even more challenging to predict the most suitable relationship label. In this work, we propose the SG-Shuffle pipeline for scene graph generation with three components: 1) a Parallel Transformer Encoder, which learns to predict object relationships in a more exclusive manner by grouping relationship labels into groups of similar purpose; 2) a Shuffle Transformer, which learns to select the final relationship labels from the category-specific features generated in the previous step; and 3) a weighted CE loss, used to alleviate the training bias caused by the imbalanced dataset.
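The weighted CE component is the most self-contained piece; a minimal sketch using inverse-frequency class weights (the paper's exact weighting scheme may differ):

```python
import torch
import torch.nn.functional as F

def weighted_ce(logits, targets, class_counts):
    """Cross-entropy with inverse-frequency class weights, so rare
    predicate labels contribute more to the loss than common ones."""
    weights = 1.0 / class_counts.float().clamp(min=1)
    weights = weights / weights.sum() * len(class_counts)  # mean weight = 1
    return F.cross_entropy(logits, targets, weight=weights)
```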
The collaborative filtering problem is commonly solved with matrix completion techniques, which recover the missing values of the user-item interaction matrix. In the matrix, each rating's position specifically identifies a given user and rated item. Previous matrix completion techniques tend to neglect the position of each element (user, item, and rating) in the matrix, and mainly focus on the semantic similarity between users and items to predict the missing values. This paper proposes a novel position-enhanced user/item representation training model for recommendation, SUPER-Rec. We first capture and store the position-enhanced rating information and its user-item relationship in fixed-size embeddings using relative positional rating encoding, unaffected by the matrix size. We then apply the trained position-enhanced user and item representations to the simplest traditional machine learning models, to highlight the pure novelty of our representation model. We contribute the first formal introduction and quantitative analysis of position-enhanced item representations in the recommendation domain, along with a principled discussion of our SUPER-Rec, which outperforms on typical collaborative filtering recommendation tasks with both explicit and implicit feedback.
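The abstract leaves the encoding under-specified; purely as a hypothetical reading of "relative positional rating encoding", one could bucketize a rating's (row, column) position so the embedding table stays fixed-size regardless of matrix dimensions:

```python
import torch

class PositionEnhancedEmbedding(torch.nn.Module):
    """Loose sketch of the idea only: a rating's representation combines
    user, item, and a relative-position code, with positions bucketed so
    the table size is independent of the matrix size. The actual
    SUPER-Rec encoding is more involved; all names here are assumptions."""

    def __init__(self, n_users, n_items, n_buckets=64, dim=32):
        super().__init__()
        self.user = torch.nn.Embedding(n_users, dim)
        self.item = torch.nn.Embedding(n_items, dim)
        self.pos = torch.nn.Embedding(n_buckets, dim)
        self.n_buckets = n_buckets

    def forward(self, u, i, n_users, n_items):
        # Bucketize the (row, column) position relative to matrix extent,
        # so the encoding is unaffected by the absolute matrix size.
        bucket = ((u.float() / n_users + i.float() / n_items) / 2
                  * (self.n_buckets - 1)).long()
        return self.user(u) + self.item(i) + self.pos(bucket)
```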
Text-based games (TBGs) are complex environments that allow users or computer agents to interact through text and achieve game goals. Building goal-oriented computer agents for text-based games is challenging, especially when step-wise feedback is the model's only textual input. Moreover, it is difficult for agents to evaluate inputs of flexible length and form drawn from a large text input space. In this paper, we provide an extensive analysis of deep learning methods applied to the field of text-based games.